System identification, also known as learning forward models, transfer functions, or system dynamics, has a long tradition in science and engineering across many fields. In particular, it is a recurring theme in Reinforcement Learning research, where forward models approximate the state transition function of a Markov Decision Process by learning a mapping from the current state and action to the next state. This problem is commonly cast directly as a Supervised Learning problem. This common approach faces several difficulties due to the inherent complexities of the dynamics to be learned, for example, delayed effects, high non-linearity, non-stationarity, partial observability and, most importantly, error accumulation when using bootstrapped predictions (predictions based on past predictions) over large time horizons. Here we explore the use of Reinforcement Learning for this problem. We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present experimental results that demonstrate that RL is a promising technique to solve this kind of problem.
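As a minimal illustration of the supervised baseline the abstract contrasts with (all names, shapes, and data are placeholders, not the paper's setup), the sketch below fits a one-step forward model and then rolls it out on its own predictions, the bootstrapped regime in which errors accumulate:

```python
# Minimal sketch: one-step supervised fitting of a forward model
# f(s, a) -> s', followed by a bootstrapped multi-step rollout.
import numpy as np
from sklearn.neural_network import MLPRegressor

# (states, actions, next states) collected from the real system (synthetic here).
S, A, S_next = np.random.randn(1000, 4), np.random.randn(1000, 1), np.random.randn(1000, 4)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(np.hstack([S, A]), S_next)  # standard supervised objective

def rollout(s0, actions):
    """Bootstrapped prediction: each step feeds on the previous prediction,
    so one-step errors compound over the horizon."""
    s, trajectory = s0, []
    for a in actions:
        s = model.predict(np.hstack([s, a]).reshape(1, -1))[0]
        trajectory.append(s)
    return trajectory
```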
In this paper, we present an evolved version of the Situational Graphs, which jointly models in a single optimizable factor graph a SLAM graph, as a set of robot keyframes containing their associated measurements and robot poses, and a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between those elements. Our proposed S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real-time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging the high-level information of the environment. To extract such high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulations of distinct indoor environments, real datasets captured over several construction sites and office environments, and a real public dataset of indoor office environments. S-Graphs+ outperforms relevant baselines in the majority of the datasets while extending the robot's situational awareness with a four-layered scene model. Moreover, we make the algorithm available as a Docker file.
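As an illustrative sketch only (not the authors' implementation), the following outlines how the four layers could be represented as node types of a single optimizable graph; all class names and fields are assumptions made for exposition:

```python
# Hypothetical node types for the four-layered factor graph described above.
from dataclasses import dataclass, field

@dataclass
class Keyframe:              # layer 1: robot pose estimates
    pose: list               # e.g. SE(3) as (x, y, z, qx, qy, qz, qw)

@dataclass
class Wall:                  # layer 2: planar wall surfaces
    plane: tuple             # (nx, ny, nz, d) plane coefficients

@dataclass
class Room:                  # layer 3: a set of wall planes forming a room
    walls: list = field(default_factory=list)

@dataclass
class Floor:                 # layer 4: rooms grouped by floor level
    rooms: list = field(default_factory=list)

# A single optimizable graph would link these layers with factors, e.g.
# keyframe->wall observation factors and wall->room membership factors,
# jointly optimizing all poses and plane parameters in real time.
```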
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
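For context, a minimal usage sketch with the Hugging Face `transformers` library is shown below, using the smaller bigscience/bloom-560m checkpoint from the same release so the example runs on a single machine; treat it as illustrative rather than the paper's evaluation setup:

```python
# Load a publicly released BLOOM checkpoint and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```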
Medical image segmentation can be implemented using deep learning methods with fast and efficient segmentation networks. Single-board computers (SBCs) are difficult to use for training deep networks due to their memory and processing limitations. Specific hardware such as Google's Edge TPU makes them suitable for real-time predictions using complex pre-trained networks. In this work, we study the performance of two SBCs, with and without hardware acceleration, for fundus image segmentation, although the conclusions of this study can be applied to the segmentation of other types of medical images with deep neural networks. To test the benefits of hardware acceleration, we use the networks and datasets from a previously published work and generalize them by testing with a dataset of ultrasound thyroid images. We measure prediction times on the SBCs and compare them with a cloud-based TPU system. The results show the feasibility of machine-learning-accelerated SBCs using the Edge TPU, which accelerates optic disc and cup segmentation, obtaining times below 25 milliseconds per image.
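As a hedged illustration of the measurement described above (the model file name is hypothetical), a single Edge TPU inference can be timed with the TensorFlow Lite runtime and its Edge TPU delegate:

```python
import time
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a model compiled for the Edge TPU; the delegate offloads supported ops.
interpreter = tflite.Interpreter(
    model_path="segmentation_edgetpu.tflite",  # hypothetical model file
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
image = np.zeros(inp["shape"], dtype=inp["dtype"])  # placeholder input image

interpreter.set_tensor(inp["index"], image)
start = time.perf_counter()
interpreter.invoke()  # one accelerated inference
print(f"prediction time: {(time.perf_counter() - start) * 1e3:.1f} ms")
```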
Prototype Generation (PG) methods are typically considered for improving the efficiency of the $k$-Nearest Neighbor ($k$NN) classifier. Such approaches aim to generate a reduced version of the corpus without decreasing the classification performance achieved with the initial set. Despite their extensive application in multiclass scenarios, few works have addressed the proposal of PG methods for the multilabel space. In this regard, this work presents novel adaptations of four multiclass PG strategies to the multilabel case. These proposals are evaluated with three $k$NN-based classifiers on 12 corpora comprising a varied range of domains and corpus sizes, as well as different noise scenarios artificially induced in the data. The results obtained show that the proposed adaptations are capable of significantly improving, in terms of both efficiency and classification performance, on the only reference multilabel PG work in the literature as well as on the case in which no PG method is applied, while also presenting statistically superior robustness in noisy scenarios. Moreover, these novel PG strategies allow prioritizing either the efficiency or the efficacy criterion through their configuration depending on the target scenario, hence covering a wide area of the solution space not previously filled by other works.
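For illustration only, the sketch below shows a generic prototype generation step (k-means centroids with majority labels, which is not one of the four adaptations proposed in the work) followed by multi-label kNN classification on the reduced corpus:

```python
# Generic PG baseline: replace the corpus with labeled centroids, then run kNN.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def prototype_generation(X, Y, n_prototypes=50):
    km = KMeans(n_clusters=n_prototypes, n_init=10).fit(X)
    protos = km.cluster_centers_
    # A prototype inherits a label if most of its cluster members have it.
    labels = np.vstack([Y[km.labels_ == c].mean(axis=0) >= 0.5
                        for c in range(n_prototypes)]).astype(int)
    return protos, labels

X = np.random.randn(500, 20)
Y = (np.random.rand(500, 5) > 0.7).astype(int)         # 5 binary labels
Xp, Yp = prototype_generation(X, Y)
knn = KNeighborsClassifier(n_neighbors=3).fit(Xp, Yp)  # reduced corpus, same kNN
```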
Mobile robots should be aware of their situation, comprising a deep understanding of their surrounding environment together with an estimation of their own state, to successfully make intelligent decisions and execute tasks autonomously in real environments. 3D scene graphs are an emerging field of research that proposes to represent the environment in a joint model comprising geometric, semantic, and relational/topological dimensions. Although 3D scene graphs have already been combined with SLAM techniques to provide robots with situational understanding, further research is still required to effectively deploy them on board mobile robots. To this end, we present in this paper a novel, real-time, online-built Situational Graph (S-Graph), which combines in a single optimizable graph the representation of the environment along the three aforementioned dimensions, together with the robot poses. Our method utilizes odometry readings and planar surfaces extracted from 3D laser scans to construct and optimize in real time a three-layered S-Graph that includes: (1) a robot tracking layer where the robot poses are registered, (2) a metric-semantic layer with features such as planar walls, and (3) our novel topological layer constraining the planar walls using higher-level features such as corridors and rooms. Our proposal not only demonstrates state-of-the-art results for pose estimation of the robot, but also contributes with a metric-semantic-topological model of the environment.
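As a hedged sketch of the plane-extraction step the method relies on (Open3D is an illustrative library choice, not necessarily the authors' implementation), planar wall candidates can be peeled off a laser scan with RANSAC:

```python
# Iteratively fit planes to a 3D scan; each plane is a wall candidate.
import open3d as o3d

cloud = o3d.io.read_point_cloud("scan.pcd")  # hypothetical 3D laser scan
planes = []
for _ in range(4):                           # peel off up to 4 planes
    if len(cloud.points) < 100:
        break
    (a, b, c, d), inliers = cloud.segment_plane(
        distance_threshold=0.02, ransac_n=3, num_iterations=1000)
    planes.append((a, b, c, d))              # plane coefficients for the S-Graph
    cloud = cloud.select_by_index(inliers, invert=True)  # remove inliers, repeat
```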
In this paper, we develop FaceQvec, a software component for estimating the conformity of facial images with each of the points contemplated in ISO/IEC 19794-5, a quality standard that defines general quality guidelines for face images that make them acceptable or unacceptable for use in official documents such as passports or ID cards. This type of quality assessment tool can help to improve the accuracy of face recognition, to identify which factors are affecting the quality of a given face image, and to take actions to eliminate or reduce those factors, e.g., with post-processing techniques or by re-acquiring the image. FaceQvec consists of the automation of 25 individual tests related to the different points contemplated in the aforementioned standard, as well as other characteristics of the images considered to be related to facial quality. We first present the results of the quality tests evaluated on a development dataset captured under realistic conditions. We use those results to adjust the decision threshold of each test. We then check the tests again on an evaluation database containing new face images not seen during development. The evaluation results demonstrate the accuracy of the individual tests for checking compliance with ISO/IEC 19794-5. FaceQvec is available online (https://github.com/uam-biometrics/faceqvec).
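A minimal sketch of the threshold-adjustment step described above, with synthetic scores and labels standing in for the development data:

```python
# For one quality test, pick the decision threshold that maximizes accuracy
# on the development set; the frozen threshold is later re-checked on the
# held-out evaluation set.
import numpy as np

def tune_threshold(scores, compliant):
    """scores: raw output of one quality test; compliant: 0/1 ground truth."""
    candidates = np.unique(scores)
    accs = [np.mean((scores >= t) == compliant) for t in candidates]
    return candidates[int(np.argmax(accs))]

dev_scores = np.random.rand(200)                              # synthetic scores
dev_labels = (dev_scores + 0.1 * np.random.randn(200)) > 0.5  # synthetic labels
threshold = tune_threshold(dev_scores, dev_labels)
```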
Neural network pruning-the task of reducing the size of a network by removing parameters-has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.
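For a concrete example of the kind of baseline the meta-analysis covers (this is standard PyTorch pruning, not ShrinkBench itself), global magnitude pruning can be expressed as:

```python
# Global magnitude pruning: zero the smallest-magnitude weights model-wide.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
params = [(m, "weight") for m in model if isinstance(m, nn.Linear)]

# Remove the 80% of weights with smallest magnitude, pooled across layers.
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.8)

sparsity = sum((m.weight == 0).sum().item() for m, _ in params) / \
           sum(m.weight.numel() for m, _ in params)
print(f"global sparsity: {sparsity:.0%}")
```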
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to object pose and point permutations, and generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
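As an illustrative sketch in the style of point-cloud transformers (the exact 3DSGrasp architecture may differ), an Offset-Attention layer computes self-attention over points and refines the difference between the input features and the attention output:

```python
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):  # x: (batch, n_points, dim)
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2), dim=-1)
        attended = attn @ self.v(x)
        # The "offset": difference between input features and the attention
        # output, refined and added back as a residual.
        return x + self.fc(x - attended)

features = torch.randn(2, 1024, 128)  # 1024 points, 128-dim features
out = OffsetAttention(128)(features)  # same shape, permutation-equivariant per point
```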
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M-parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B-parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
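A minimal sketch of the masked modeling objective described above, with a stand-in model and illustrative shapes (not Muse's actual architecture):

```python
# Mask a random subset of discrete image tokens and train a model to predict
# them conditioned on a text embedding; loss is taken on masked positions only.
import torch
import torch.nn.functional as F

def masked_token_loss(model, image_tokens, text_emb, mask_id, mask_ratio=0.5):
    # image_tokens: (batch, seq) integer ids from a pre-trained VQ tokenizer.
    mask = torch.rand_like(image_tokens, dtype=torch.float) < mask_ratio
    inputs = image_tokens.masked_fill(mask, mask_id)
    logits = model(inputs, text_emb)  # (batch, seq, vocab)
    # Unmasked tokens serve only as context.
    return F.cross_entropy(logits[mask], image_tokens[mask])

vocab, seq = 8192, 256
tokens = torch.randint(0, vocab, (4, seq))
text_emb = torch.randn(4, 512)
stub = lambda inp, txt: torch.randn(inp.shape[0], inp.shape[1], vocab + 1)  # stand-in
loss = masked_token_loss(stub, tokens, text_emb, mask_id=vocab)
```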